fix(memory-plugin): multi-agent isolation for message counter #671
Open
yingriyanlong wants to merge 3 commits into volcengine:main from
Conversation
added 3 commits on March 15, 2026 at 21:47
…context agentId

When the OpenClaw gateway serves multiple agents, each agent's before_agent_start and agent_end hooks now carry the agent's ID in the second parameter (PluginHookAgentContext). The plugin dynamically switches the client's agentId before each recall/capture operation, ensuring memories are routed to the correct agent_space (md5(user_id + agent_id)[:12]).

Changes:
- client.ts: Add setAgentId()/getAgentId() to allow dynamic agent switching. Clears cached runtimeIdentity and resolvedSpaceByScope when switching to ensure correct space derivation.
- index.ts: Extract agentId from the hook ctx (2nd param) in both the before_agent_start and agent_end handlers.

This is backward compatible: if ctx.agentId is absent (single-agent setup), the plugin falls back to the static config agentId as before.
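The space-derivation and cache-clearing behavior described above can be sketched as follows. This is a minimal illustration, not the plugin's actual source: `MemoryClient`, `deriveAgentSpace`, and `spaceFor` are hypothetical stand-ins for the real client.ts internals, and only the `md5(user_id + agent_id)[:12]` rule and the "clear cached spaces on switch" behavior come from the commit message.

```typescript
import { createHash } from "node:crypto";

// The derivation rule from the commit message: agent_space = md5(user_id + agent_id)[:12]
function deriveAgentSpace(userId: string, agentId: string): string {
  return createHash("md5").update(userId + agentId).digest("hex").slice(0, 12);
}

// Hypothetical stand-in for the plugin's client: setAgentId() invalidates the
// cached space map so the next recall/capture re-derives the correct space.
class MemoryClient {
  private agentId: string;
  private resolvedSpaceByScope = new Map<string, string>();

  constructor(agentId: string) {
    this.agentId = agentId;
  }

  setAgentId(agentId: string): void {
    if (agentId !== this.agentId) {
      this.agentId = agentId;
      this.resolvedSpaceByScope.clear(); // force re-derivation for the new agent
    }
  }

  getAgentId(): string {
    return this.agentId;
  }

  spaceFor(userId: string): string {
    const key = `${userId}:${this.agentId}`;
    let space = this.resolvedSpaceByScope.get(key);
    if (!space) {
      space = deriveAgentSpace(userId, this.agentId);
      this.resolvedSpaceByScope.set(key, space);
    }
    return space;
  }
}
```

Clearing the cache on switch is what prevents agent B from silently reusing a space derived for agent A when both are served by the same gateway process.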
…ary VLM token consumption
Problem:
The auto-capture feature triggers VLM extraction (deepseek-chat) for almost every
user message, including trivially short ones (as low as 4 CJK chars / 10 EN chars).
In multi-agent setups with high interaction volume, this leads to excessive API calls
and token consumption (observed 100K+ deepseek-chat calls in 2 days).
Solution:
Add a configurable `captureMinLength` option (default: 50 chars) that sets a minimum
sanitized text length threshold for triggering auto-capture. Messages shorter than this
threshold are skipped (reason: `length_out_of_range`), avoiding unnecessary VLM calls.
The new threshold works as a floor: `Math.max(resolveCaptureMinLength(text), captureMinLength)`,
preserving the existing CJK/EN-aware minimum while allowing users to set a higher bar.
Changes:
- config.ts: Add captureMinLength type, DEFAULT_CAPTURE_MIN_LENGTH=50, allowed key,
resolve logic (clamped to 1..1000), and UI hint
- text-utils.ts: Update getCaptureDecision signature to accept captureMinLength,
use Math.max to combine with built-in minimum
- index.ts: Pass cfg.captureMinLength to getCaptureDecision call
Users can override via plugin config:
{ "captureMinLength": 100 } // skip messages under 100 chars
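The floor behavior described above can be sketched like this. It is an illustrative reconstruction, not the plugin's text-utils.ts: the `Math.max` combination, the `length_out_of_range` reason, and the 50-char default are from the commit message, while the CJK/EN heuristic inside `resolveCaptureMinLength` (4 chars for CJK-heavy text, 10 for English) is an assumption inferred from the thresholds mentioned in the problem statement.

```typescript
const DEFAULT_CAPTURE_MIN_LENGTH = 50;

// Assumed built-in language-aware minimum: CJK-heavy text needs fewer chars.
function resolveCaptureMinLength(text: string): number {
  const cjkChars = (text.match(/[\u4e00-\u9fff]/g) ?? []).length;
  const cjkRatio = cjkChars / Math.max(text.length, 1);
  return cjkRatio > 0.5 ? 4 : 10; // hypothetical thresholds from the problem statement
}

type CaptureDecision = { capture: boolean; reason?: string };

function getCaptureDecision(
  sanitized: string,
  captureMinLength: number = DEFAULT_CAPTURE_MIN_LENGTH
): CaptureDecision {
  // The configured value acts as a floor over the built-in minimum, so it can
  // only raise the bar, never lower it below the language-aware default.
  const minLen = Math.max(resolveCaptureMinLength(sanitized), captureMinLength);
  if (sanitized.length < minLen) {
    return { capture: false, reason: "length_out_of_range" };
  }
  return { capture: true };
}
```

Because the floor is combined with `Math.max`, setting `captureMinLength` to a small value (clamped to 1..1000 per the commit) cannot disable the existing CJK/EN-aware minimum.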
…o support multi-agent isolation
Mac does not seem to be a GitHub user. You need a GitHub account to be able to sign the CLA. If you already have a GitHub account, please add the email address used for this commit to your account. Have you already signed the CLA but the status is still pending? Let us recheck it.
Collaborator

You need to resolve the conflicts.
Fixes a bug where multiple agents shared the same lastProcessedMsgCount global variable, leading to redundant memory extractions and high token consumption in multi-agent environments.
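The counter isolation this PR describes can be sketched as replacing a single module-level counter with a per-agent map. This is a simplified illustration under the assumption that the plugin tracks a high-water mark over the message list; `takeUnprocessed` and its exact shape are hypothetical, while `lastProcessedMsgCount` is the variable named in the description.

```typescript
// Before the fix (per the PR description): one module-level counter shared by
// every agent, so agent B's progress clobbered agent A's and vice versa.
// After the fix: each agent keys its own counter.
const lastProcessedMsgCountByAgent = new Map<string, number>();

// Returns the messages this agent has not yet processed and advances only
// that agent's counter, leaving other agents' state untouched.
function takeUnprocessed(agentId: string, messages: string[]): string[] {
  const last = lastProcessedMsgCountByAgent.get(agentId) ?? 0;
  const fresh = messages.slice(last);
  lastProcessedMsgCountByAgent.set(agentId, messages.length);
  return fresh;
}
```

With a shared counter, two interleaved agents would either re-extract memories from messages the other agent had already advanced past (the redundant extractions and token burn described above) or skip their own unprocessed messages; keying by agentId removes both failure modes.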